RankArena: A Unified Platform for Evaluating Retrieval, Reranking and RAG with Human and LLM Feedback
Feb 10, 2025
Abdelrahman Abdallah
Mahmoud Abdalla
Bhawna Piryani
Jamshid Mozafari
Mohammed Ali
Adam Jatowt
Abstract
Evaluating the quality of retrieval-augmented generation (RAG) and document reranking systems remains challenging due to the lack of scalable, user-centric, and multi-perspective evaluation tools. We introduce RankArena, a unified platform for collecting structured human and LLM-based feedback and for using it to compare and analyse the performance of retrieval pipelines, rerankers, and RAG systems. RankArena supports multiple evaluation modes: direct reranking visualisation, blind pairwise comparisons with human or LLM voting, supervised manual document annotation, and end-to-end RAG answer quality assessment. It captures fine-grained relevance feedback through both pairwise preferences and full-list annotations, along with auxiliary metadata such as movement metrics, annotation time, and quality ratings. The platform also integrates LLM-as-a-judge evaluation, enabling comparison between model-generated rankings and human ground-truth annotations. All interactions are stored as structured evaluation datasets that can be used to train rerankers, reward models, judgment agents, or retrieval strategy selectors.
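To make the kind of data described above more concrete, below is a minimal sketch of what a single evaluation record and the model-vs-human ranking comparison could look like. The schema and names (`RankingFeedback`, `annotation_seconds`, `kendall_tau`, etc.) are illustrative assumptions for this page, not RankArena's actual data format or API.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import List

@dataclass
class RankingFeedback:
    """One stored evaluation record (hypothetical schema, not RankArena's actual format)."""
    query: str
    doc_ids: List[str]          # documents shown to the annotator
    human_order: List[str]      # full-list annotation: ids from most to least relevant
    model_order: List[str]      # ranking produced by a reranker or an LLM judge
    annotation_seconds: float   # auxiliary metadata: time spent annotating
    quality_rating: int         # e.g. a 1-5 rating of the final ranked list

def kendall_tau(order_a: List[str], order_b: List[str]) -> float:
    """Rank agreement between two orderings of the same items, in [-1, 1]."""
    pos_a = {d: i for i, d in enumerate(order_a)}
    pos_b = {d: i for i, d in enumerate(order_b)}
    concordant = discordant = 0
    for x, y in combinations(order_a, 2):
        # a pair is concordant if both orderings rank x and y in the same direction
        same_direction = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0
        concordant += same_direction
        discordant += not same_direction
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

record = RankingFeedback(
    query="what is retrieval-augmented generation?",
    doc_ids=["d1", "d2", "d3", "d4"],
    human_order=["d2", "d1", "d4", "d3"],
    model_order=["d2", "d4", "d1", "d3"],
    annotation_seconds=41.5,
    quality_rating=4,
)
print(kendall_tau(record.model_order, record.human_order))  # ~0.67 agreement
```

A dataset of such records would support the uses named in the abstract: the pairwise preferences can train reward models or judgment agents, while the full-list annotations and agreement scores can benchmark rerankers against human ground truth.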
Type
Publication
CIKM '25: Proceedings of the 34th ACM International Conference on Information and Knowledge Management