Hi there, I'm Rachneet Sachdeva 👋

Portfolio · LinkedIn · Google Scholar · Email

About Me

I'm a Ph.D. researcher in Natural Language Processing at the UKP Lab, TU Darmstadt, working under the supervision of Prof. Dr. Iryna Gurevych. My research focuses on building safe, explainable, and trustworthy AI systems, with particular emphasis on:

  • 🔬 LLM Safety & Adversarial Robustness - Designing jailbreak attacks and defenses to improve model safety
  • 🎯 Hallucination Detection & Mitigation - Reducing errors in long-form question answering systems
  • 🚀 Production-Ready AI - Shipping scalable NLP infrastructure and low-latency AI services
  • 📊 Model Explainability - Making AI decisions transparent and interpretable

🔥 Recent Highlights

  • πŸ“ 2 papers accepted to ACL and EMNLP 2025 on hallucination detection and LLM jailbreaking
  • πŸ† Co-led development of UKP-SQuARE, a QA platform used by 1000+ users
  • πŸ›‘οΈ Designed POATE, a contrastive reasoning-based jailbreak attack achieving 40% higher success rates
  • πŸ€– Built DocChat, a multi-agent RAG system with hybrid retrieval and verification agents
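
For the curious, here is a minimal, dependency-free Python sketch of the hybrid-retrieval-plus-verification pattern mentioned above. It is illustrative only, not DocChat's actual code: the function names, the keyword/cosine scoring blend, and the token-overlap verifier are all simplifying assumptions.

```python
# Illustrative sketch only -- not DocChat's code. Hybrid retrieval blends a
# sparse keyword signal with a (stand-in) dense similarity; a verifier agent
# then checks that the answer is supported by the retrieved evidence.
from collections import Counter
from math import sqrt


def tokens(text: str) -> Counter:
    # Crude tokenizer: lowercase, split on whitespace, strip punctuation.
    return Counter(w.strip(".,!?") for w in text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Bag-of-words cosine similarity as a stand-in for dense embeddings.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def keyword_score(query: Counter, doc: Counter) -> float:
    # Sparse signal: fraction of query terms that appear in the document.
    return sum(1 for t in query if doc[t]) / max(len(query), 1)


def hybrid_retrieve(query: str, docs: list[str], k: int = 2, alpha: float = 0.5) -> list[str]:
    # Rank documents by a weighted blend of dense and sparse scores.
    q = tokens(query)
    return sorted(
        docs,
        key=lambda d: alpha * cosine(q, tokens(d)) + (1 - alpha) * keyword_score(q, tokens(d)),
        reverse=True,
    )[:k]


def verify(answer: str, evidence: list[str]) -> bool:
    # Toy verification agent: every answer token must occur in the evidence.
    support = tokens(" ".join(evidence))
    return all(support[t] for t in tokens(answer))


if __name__ == "__main__":
    corpus = [
        "POATE probes model defenses with contrastive questions.",
        "UKP-SQuARE is a platform for question answering research.",
    ]
    context = hybrid_retrieve("What is UKP-SQuARE?", corpus, k=1)
    answer = "UKP-SQuARE is a platform for question answering research"
    print(verify(answer, context))  # True: every answer token is supported
```

In a real system the cosine stand-in would be a dense retriever, the keyword score a BM25 index, and the verifier an LLM judge; the control flow stays the same.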

πŸ› οΈ Tech Stack

Languages & Frameworks
Python · PyTorch · HuggingFace · TensorFlow · LangChain

Tools & Infrastructure
Docker · Kubernetes · AWS · FastAPI · MongoDB

Specializations

  • 🤖 LLMs: GPT-4, LLaMA, T5, BART
  • 🔗 RAG Systems: LangChain, LangGraph, LlamaIndex, MCP
  • 📊 Experiment Tracking: Weights & Biases, LangSmith
  • 🔍 Evaluation: Adversarial Testing, Explainability, Calibration

💼 Professional Experience

Ph.D. Student @ UKP Lab, TU Darmstadt (Sep 2021 - Present)
Working on LLM safety, hallucination detection, and explainable AI

ML Engineer Intern @ Convaise (Feb 2021 - Jun 2021)
Built an automated ML platform that cut model deployment time from 2 days to 10 minutes

Research Assistant @ RWTH Aachen University (May 2018 - Apr 2020)
Analyzed 80M+ Amazon reviews for gender bias detection using deep learning

Systems Engineer @ Infosys Limited (Jun 2015 - Aug 2017)
Automated testing and CI/CD, increasing deployment frequency by 400%

🤝 Open Source & Community

  • πŸ” Reviewer for ACL Rolling Review (ARR)
  • πŸ‘¨β€πŸ« Teaching Assistant for NLP Ethics course (100+ students)
  • πŸŽ“ Mentor to 13+ BSc/MSc students at TU Darmstadt
  • 🌟 Open to collaborations on trustworthy AI and NLP safety research

📊 GitHub Stats

[GitHub stats, streak, and top-languages cards]

📫 Get in Touch


"Building AI systems that are not just powerful, but trustworthy and explainable."

⭐️ From Rachneet

Pinned Repositories

  1. UKP-SQuARE/square-core (Public)
     SQuARE: Software for question answering research.
     Python · 75 stars · 13 forks

  2. UKPLab/emnlp2025-poate-attack (Public)
     Code associated with "Turning Logic Against Itself: Probing Model Defenses Through Contrastive Questions".
     Python · 4 stars · 1 fork

  3. UKPLab/acl2025-lfqa-hallucination (Public)
     Code and data for the ACL 2025 paper "Localizing and Mitigating Errors in Long-form Question Answering".
     Python · 5 stars

  4. UKPLab/eacl2024-catfood (Public)
     Enhancing small language models with LLM-generated counterfactuals.
     Python · 5 stars

  5. prompt-engineering (Public)
     Guide to prompting large language models.
     Jupyter Notebook

  6. research-assistant-chatbot (Public)
     Python