
Benchmarking IR Models (w/ Nandan Thakur)

Duration: 21:55
 
Content provided by Zeta Alpha. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Zeta Alpha or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

In this episode of Neural Search Talks, we're chatting with Nandan Thakur about the state of model evaluations in Information Retrieval. Nandan is the first author of the paper that introduced the BEIR benchmark, and since its publication in 2021, we've seen models hill-climb on its leaderboard yet still fail to outperform the BM25 baseline on subsets like Touché 2020. He also shares his thoughts on what the future of benchmarking IR systems might look like, such as the newly announced TREC RAG track this year.
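
For listeners who want to check the BM25-on-Touché-2020 claim for themselves, here is a minimal sketch using the open-source `beir` Python package (pip install beir) and its Elasticsearch-backed BM25Search wrapper. The dataset name, output directory, and hostname below are illustrative assumptions, not taken from the episode.

```python
# Minimal sketch: evaluating a BM25 baseline on the webis-touche2020 subset of BEIR.
# Assumes `pip install beir` and an Elasticsearch instance running at localhost:9200.
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.lexical import BM25Search

# Download and unpack the dataset from the public BEIR mirror.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Index the corpus and retrieve with BM25 via Elasticsearch.
model = BM25Search(index_name="webis-touche2020", hostname="localhost:9200", initialize=True)
retriever = EvaluateRetrieval(model)
results = retriever.retrieve(corpus, queries)

# Standard BEIR metrics at the default cutoffs (1, 3, 5, 10, 100, 1000).
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(ndcg)  # e.g. {'NDCG@1': ..., 'NDCG@10': ..., ...}
```

Swapping in a neural retriever for BM25Search while keeping the rest of this loop is the usual way models are compared on the BEIR leaderboard.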

Timestamps:
0:00 Introduction & the vibe at SIGIR'24
1:19 Nandan's two papers at the conference
2:09 The backstory of the BEIR benchmark
5:55 The shortcomings of BEIR in 2024
8:04 What's up with the Touché 2020 subset of BEIR
11:24 The problem with overfitting on benchmarks
13:09 TREC-RAG: the future of IR benchmarking
17:34 MIRACL & the importance of multilinguality in IR
21:38 Outro


