RAG Evaluation: Using Claude Skills to craft a truly challenging Q&A set
To truly evaluate a RAG system, public benchmarks aren’t enough: you need datasets that reflect real-world difficulty, including multi-hop questions, questions that require reasoning over broad context, and heterogeneous knowledge bases. In this article, we show how we designed a character-based, chunk-agnostic framework with easy and medium difficulty levels and human validation, leveraging Claude Skills and LLM retrievers to generate questions, answers, and evidence spans. The result is a public dataset based on the D&D SRD 5.2.1 (plus two internal datasets), built for reproducible, comparable testing of RAG pipelines.
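To make the character-based, chunk-agnostic idea concrete, here is a minimal sketch in Python. The field names (`doc_id`, `start`, `end`, `difficulty`) are illustrative assumptions, not the published schema; the point is that gold evidence is stored as character offsets into the original documents, so the same dataset can score pipelines regardless of how they chunk the corpus.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceSpan:
    # Hypothetical fields: character offsets into an original source document
    doc_id: str   # which source document the span points into
    start: int    # character offset, inclusive
    end: int      # character offset, exclusive

@dataclass
class QAItem:
    question: str
    answer: str
    difficulty: str  # e.g. "easy" or "medium", per the framework's two levels
    evidence: list[EvidenceSpan] = field(default_factory=list)

def span_is_covered(span: EvidenceSpan, chunks: list[tuple[str, int, int]]) -> bool:
    """True if some retrieved chunk (doc_id, start, end) fully contains the gold span.

    Because spans are character offsets into the original documents,
    this check is independent of the pipeline's chunking strategy.
    """
    return any(
        doc_id == span.doc_id and c_start <= span.start and span.end <= c_end
        for doc_id, c_start, c_end in chunks
    )

def evidence_recall(item: QAItem, chunks: list[tuple[str, int, int]]) -> float:
    """Fraction of the item's gold evidence spans covered by the retrieved chunks."""
    if not item.evidence:
        return 0.0
    covered = sum(span_is_covered(s, chunks) for s in item.evidence)
    return covered / len(item.evidence)
```

Because coverage is defined against character offsets rather than chunk IDs, two pipelines with different chunk sizes or overlap settings can be compared on the exact same gold set.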